Learning Mixtures of Low-Rank Models

Authors

Abstract

We study the problem of learning mixtures of low-rank models, i.e. reconstructing multiple low-rank matrices from unlabelled linear measurements of each. This problem enriches two widely studied settings, namely matrix sensing and mixed linear regression, by bringing both latent variables (i.e. unknown labels) and structural priors (i.e. low-rank structures) into consideration. To cope with the non-convexity issues arising from the unlabelled heterogeneous data and the low-complexity structure, we develop a three-stage meta-algorithm that is guaranteed to recover the unknown matrices with near-optimal sample and computational complexities under Gaussian designs. In addition, the proposed algorithm is provably stable against random noise. We complement our theoretical studies with empirical evidence that confirms the efficacy of our algorithm.
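To make the setup concrete, below is a minimal sketch (not the authors' code) of the observation model the abstract describes: each sample is an unlabelled linear measurement, under a Gaussian design, of one of K unknown low-rank matrices. All dimensions, parameter names, and values here are illustrative assumptions.

# Illustrative data-generation sketch for mixed low-rank matrix sensing.
# Hypothetical parameters; not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

n1, n2, r = 30, 30, 2        # matrix dimensions and rank (assumed)
K, N, sigma = 3, 2000, 0.01  # mixture components, sample size, noise level (assumed)

# Ground-truth rank-r matrices M_1, ..., M_K.
M = [rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2)) for _ in range(K)]

# Each sample i draws a hidden label z_i and a Gaussian design A_i, and observes
# y_i = <A_i, M_{z_i}> + noise. The label z_i is not revealed to the learner.
labels = rng.integers(0, K, size=N)
A = rng.standard_normal((N, n1, n2))
y = np.array([np.sum(A[i] * M[labels[i]]) for i in range(N)])
y += sigma * rng.standard_normal(N)

# Learning task: recover {M_k} (up to permutation of labels) from (A, y) alone.
print(y[:5])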


Related resources

Greedy Learning of Generalized Low-Rank Models

Learning of low-rank matrices is fundamental to many machine learning applications. A state-of-the-art algorithm is the rank-one matrix pursuit (R1MP). However, it can only be used in matrix completion problems with the square loss. In this paper, we develop a more flexible greedy algorithm for generalized low-rank models whose optimization objective can be smooth or nonsmooth, general convex or...


Unifying Low-Rank Models for Visual Learning

Many problems in signal processing, machine learning and computer vision can be solved by learning low rank models from data. In computer vision, problems such as rigid structure from motion have been formulated as an optimization over subspaces with fixed rank. These hard rank constraints have traditionally been imposed by a factorization that parameterizes subspaces as a product of two matri...


Low-rank Bandits with Latent Mixtures

We study the task of maximizing rewards from recommending items (actions) to users sequentially interacting with a recommender system. Users are modeled as latent mixtures of C many representative user classes, where each class specifies a mean reward profile across actions. Both the user features (mixture distribution over classes) and the item features (mean reward vector per class) are unkno...


Learning of Generalized Low-Rank Models: A Greedy Approach

Learning of low-rank matrices is fundamental to many machine learning applications. A state-of-the-art algorithm is the rank-one matrix pursuit (R1MP). However, it can only be used in matrix completion problems with the square loss. In this paper, we develop a more flexible greedy algorithm for generalized low-rank models whose optimization objective can be smooth or nonsmooth, general convex o...


Generalized Low Rank Models

Principal components analysis (PCA) is a well-known technique for approximating a data set represented by a matrix by a low rank matrix. Here, we extend the idea of PCA to handle arbitrary data sets consisting of numerical, Boolean, categorical, ordinal, and other data types. This framework encompasses many well known techniques in data analysis, such as nonnegative matrix factorization, matrix...



Journal

Journal title: IEEE Transactions on Information Theory

Year: 2021

ISSN: 0018-9448, 1557-9654

DOI: https://doi.org/10.1109/tit.2021.3065700